Static Analysis by Policy Iteration on Relational Domains
Authors
Abstract
We give a new practical algorithm to compute, in finite time, a fixpoint (and often the least fixpoint) of a system of equations in the abstract numerical domains of zones and templates used for static analysis of programs by abstract interpretation. This paper extends previous work on the non-relational domain of intervals to relational domains. The algorithm is based on policy iteration techniques, rather than the Kleene iterations classically used in static analysis, and generates from the system of equations a finite set of simpler systems that we call policies. This set of policies satisfies a selection property which ensures that the least fixpoint of the original system of equations is the minimum of the fixpoints of the policies. Computing a fixpoint of a policy is done by linear programming. Experiments on a prototype analyzer, compared in particular with analyzers such as LPInv and the Octagon Analyzer, show the method to be in general both more precise and faster than the usual Kleene iteration combined with widening and narrowing techniques.
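To make the notions of policy and LP-based fixpoint computation concrete, here is a minimal, purely illustrative sketch on the simplest possible case: a single interval upper bound for the loop x = 0; while (x < 100) x = x + 1. The variable names, the toy equation, and the use of SciPy's linprog are assumptions made for this example; this is not the paper's prototype analyzer, which works on zones and templates.

```python
# Illustrative sketch only: a one-variable interval bound, not the paper's
# prototype analyzer.  Assumes SciPy is available for linear programming.
#
# Analysed loop:   x = 0; while (x < 100) x = x + 1;
# The upper bound u at the loop head satisfies  u = max(0, min(u + 1, 99)).
# A policy fixes which argument the min operator selects; for each policy the
# remaining min-free equation is solved by an LP (minimise u subject to
# u >= every selected term), and the selection property says the least
# fixpoint of the original equation is the minimum over all policies.
import math
from scipy.optimize import linprog

def policy_fixpoint_by_lp(policy: str) -> float:
    """Least solution of the min-free equation chosen by `policy`.

    policy == "guard"     keeps  u = max(0, 99)
    policy == "increment" keeps  u = max(0, u + 1)
    Constraints are encoded as A_ub @ [u] <= b_ub for linprog.
    """
    if policy == "guard":
        A_ub, b_ub = [[-1.0], [-1.0]], [0.0, -99.0]   # u >= 0 and u >= 99
    else:
        A_ub, b_ub = [[-1.0], [0.0]], [0.0, -1.0]     # u >= 0 and u >= u + 1, i.e. 0 >= 1
    res = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)], method="highs")
    return float(res.x[0]) if res.success else math.inf  # no finite fixpoint -> +oo

def original(u: float) -> float:
    """Right-hand side of the original equation containing the min operator."""
    return max(0.0, min(u + 1.0, 99.0))

fixpoints = {p: policy_fixpoint_by_lp(p) for p in ("increment", "guard")}
least = min(fixpoints.values())                 # selection property: take the minimum
assert abs(original(least) - least) < 1e-9      # 99 is indeed a fixpoint of the original
print(fixpoints, "least fixpoint:", least)      # {'increment': inf, 'guard': 99.0} 99.0
```

On this toy equation only the policy that selects the loop guard has a finite fixpoint, and taking the minimum over the two policy fixpoints gives 99, the least fixpoint of the original equation. A real analyzer improves the current policy iteratively instead of enumerating all policies; the brute-force minimum above merely illustrates the selection property stated in the abstract.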
Similar references
Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes
We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual value-function learning step with a learning step in policy space. This is advantageous in domains where good policies are easier to represent and learn than the corresponding value functions, which is often the case ...
Policy Iteration within Logico-Numerical Abstract Domains
Policy Iteration is an algorithm for the exact solution of optimization and game theory problems formulated as equations over min-max affine expressions. It has been shown that the problem of finding the least fixpoint of semantic equations on some abstract domains can be reduced to such optimization problems. This enables the use of Policy Iteration to solve such equations, instead of the tradit... (the general shape of these min-max affine systems is recalled after this list).
Approximate Policy Iteration with a Policy Language Bias
We explore approximate policy iteration (API), replacing the usual costfunction learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous technique can solve. In particular, we induce high-quality domain-specific planners for classical planning domains (both deterministic and...
Integrating Policy Iterations in Abstract Interpreters
Among the precise abstract interpretation methods developed during the last decade, policy iteration is one of the most promising. Despite its efficiency, it has not yet seen broad usage in static analyzers. We believe the main explanation for this restricted use, besides the novelty of the technique, lies in its lack of integration in the classic abstract domain framework. This prevents an easy ...
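For readers less familiar with the notation used in the entries above, the min-max affine systems they refer to are commonly written in a shape like the following (the index sets and coefficients are generic placeholders chosen for exposition, not taken from any of the cited papers; exact normal forms vary between papers):

$$x_i \;=\; \min_{j \in J_i} \; \max_{k \in K_{i,j}} \bigl( a_{i,j,k}^{\top} x + b_{i,j,k} \bigr), \qquad i = 1, \dots, n.$$

A policy $\pi$ selects one index $\pi(i) \in J_i$ for every $i$, leaving the min-free system

$$x_i \;=\; \max_{k \in K_{i,\pi(i)}} \bigl( a_{i,\pi(i),k}^{\top} x + b_{i,\pi(i),k} \bigr),$$

whose least fixpoint can be computed exactly, for instance by linear programming as in the abstract above.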
Publication year: 2007